  1. The meta-inductive justification of induction. Tom F. Sterkenburg - 2020 - Episteme 17 (4):519-541.
    I evaluate Schurz's proposed meta-inductive justification of induction, a refinement of Reichenbach's pragmatic justification that rests on results from the machine learning branch of prediction with expert advice. My conclusion is that the argument, suitably explicated, comes remarkably close to its grand aim: an actual justification of induction. This finding, however, is subject to two main qualifications, and still disregards one important challenge. The first qualification concerns the empirical success of induction. Even though, I argue, Schurz's argument does not need (...)
  2. The no-free-lunch theorems of supervised learning. Tom F. Sterkenburg & Peter D. Grünwald - 2021 - Synthese 199 (3-4):9979-10015.
    The no-free-lunch theorems promote a skeptical conclusion that all possible machine learning algorithms equally lack justification. But how could this leave room for a learning theory that shows that some algorithms are better than others? Drawing parallels to the philosophy of induction, we point out that the no-free-lunch results presuppose a conception of learning algorithms as purely data-driven. On this conception, every algorithm must have an inherent inductive bias that wants justification. We argue that many standard learning algorithms should rather (...)
  3. The Metainductive Justification of Induction: The Pool of Strategies. Tom F. Sterkenburg - 2019 - Philosophy of Science 86 (5):981-992.
    This article poses a challenge to Schurz’s proposed metainductive justification of induction. It is argued that Schurz’s argument requires a notion of optimality that can deal with an expanding pool of prediction strategies.
  4. On Explaining the Success of Induction. Tom F. Sterkenburg - forthcoming - British Journal for the Philosophy of Science.
    Douven (in press) observes that Schurz's meta-inductive justification of induction cannot explain the great empirical success of induction, and offers an explanation based on computer simulations of the social and evolutionary development of our inductive practices. In this paper, I argue that Douven's account does not address the explanatory question that Schurz's argument leaves open, and that the assumption of the environment's induction-friendliness that is inherent to Douven's simulations is not justified by Schurz's argument.
  5. Universal Prediction: A Philosophical Investigation. Tom F. Sterkenburg - 2018 - Dissertation, University of Groningen
    In this thesis I investigate the theoretical possibility of a universal method of prediction. A prediction method is universal if it is always able to learn from data: if it is always able to extrapolate given data about past observations to maximally successful predictions about future observations. The context of this investigation is the broader philosophical question into the possibility of a formal specification of inductive or scientific reasoning, a question that also relates to modern-day speculation about a fully automatized (...)
  6. Putnam’s Diagonal Argument and the Impossibility of a Universal Learning Machine. Tom F. Sterkenburg - 2019 - Erkenntnis 84 (3):633-656.
    Putnam construed the aim of Carnap’s program of inductive logic as the specification of a “universal learning machine,” and presented a diagonal proof against the very possibility of such a thing. Yet the ideas of Solomonoff and Levin lead to a mathematical foundation of precisely those aspects of Carnap’s program that Putnam took issue with, and in particular, resurrect the notion of a universal mechanical rule for induction. In this paper, I take up the question whether the Solomonoff–Levin proposal is (...)
  7. Solomonoff Prediction and Occam’s Razor. Tom F. Sterkenburg - 2016 - Philosophy of Science 83 (4):459-479.
    Algorithmic information theory gives an idealized notion of compressibility that is often presented as an objective measure of simplicity. It is suggested at times that Solomonoff prediction, or algorithmic information theory in a predictive setting, can deliver an argument to justify Occam’s razor. This article explicates the relevant argument and, by converting it into a Bayesian framework, reveals why it has no such justificatory force. The supposed simplicity concept is better perceived as a specific inductive assumption, the assumption of effectiveness. (...)
  8. Commentary on David Watson, “On the Philosophy of Unsupervised Learning,” Philosophy & Technology. Tom F. Sterkenburg - 2023 - Philosophy and Technology 36 (4):1-5.
  9. Peirce, Pedigree, Probability. Rush T. Stewart & Tom F. Sterkenburg - 2022 - Transactions of the Charles S. Peirce Society 58 (2):138-166.
    An aspect of Peirce’s thought that may still be underappreciated is his resistance to what Levi calls _pedigree epistemology_, to the idea that a central focus in epistemology should be the justification of current beliefs. Somewhat more widely appreciated is his rejection of the subjective view of probability. We argue that Peirce’s criticisms of subjectivism, to the extent they grant such a conception of probability is viable at all, revert back to pedigree epistemology. A thoroughgoing rejection of pedigree in the (...)
  10. On the truth-convergence of open-minded Bayesianism. Tom F. Sterkenburg & Rianne de Heide - 2022 - Review of Symbolic Logic 15 (1):64-100.
    Wenmackers and Romeijn (2016) formalize ideas going back to Shimony (1970) and Putnam (1963) into an open-minded Bayesian inductive logic that can dynamically incorporate statistical hypotheses proposed in the course of the learning process. In this paper, we show that Wenmackers and Romeijn’s proposal does not preserve the classical Bayesian consistency guarantee of merger with the true hypothesis. We diagnose the problem, and offer a forward-looking open-minded Bayesianism that does preserve a version of this guarantee.
  11. On characterizations of learnability with computable learners. Tom F. Sterkenburg - 2022 - Proceedings of Machine Learning Research 178:3365-3379.
    We study computable PAC (CPAC) learning as introduced by Agarwal et al. (2020). First, we consider the main open question of finding characterizations of proper and improper CPAC learning. We give a characterization of a closely related notion of strong CPAC learning, and provide a negative answer to the COLT open problem posed by Agarwal et al. (2021) whether all decidably representable VC classes are improperly CPAC learnable. Second, we consider undecidability of (computable) PAC learnability. We give a simple general (...)
  12. A generalized characterization of algorithmic probability. Tom F. Sterkenburg - 2017 - Theory of Computing Systems 61 (4):1337-1352.
    An a priori semimeasure (also known as “algorithmic probability” or “the Solomonoff prior” in the context of inductive inference) is defined as the transformation, by a given universal monotone Turing machine, of the uniform measure on the infinite strings. It is shown in this paper that the class of a priori semimeasures can equivalently be defined as the class of transformations, by all compatible universal monotone Turing machines, of any continuous computable measure in place of the uniform measure. Some consideration (...)
  13. Deborah G. Mayo: Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars. [REVIEW] Tom F. Sterkenburg - 2020 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 51 (3):507-510.